The Cyclic Block Conditional Gradient Method for Convex Optimization Problems
Authors
Abstract
In this paper we study the convex problem of minimizing the sum of a smooth function and a compactly supported non-smooth term with a specific separable form. We analyze the block version of the generalized conditional gradient method when the blocks are chosen in a cyclic order. A global sublinear rate of convergence is established for two different stepsize strategies commonly used in this class of methods. Numerical comparisons of the proposed method to both the classical conditional gradient algorithm and its random block version demonstrate the effectiveness of the cyclic block update rule.
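As a concrete illustration of the setting, the sketch below instantiates the method for f(x) = 0.5*||Ax - b||^2 with each separable non-smooth term taken to be the indicator of an l1-ball, so that every block subproblem reduces to a call to a linear minimization oracle. The function names, the block partition, and the open-loop 2/(k+2) stepsize (one of the two commonly used strategies; the other is a line-search rule) are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def lmo_l1(g, radius):
    """Linear minimization oracle for an l1-ball:
    argmin_{||s||_1 <= radius} <g, s> is a signed, scaled vertex."""
    j = np.argmax(np.abs(g))
    s = np.zeros_like(g)
    s[j] = -radius * np.sign(g[j])
    return s

def cyclic_block_cg(A, b, blocks, radius=1.0, n_cycles=500):
    """Sketch: cyclic block conditional gradient for
    min 0.5*||Ax - b||^2  s.t.  each block of x lies in an l1-ball."""
    x = np.zeros(A.shape[1])
    for k in range(n_cycles):
        gamma = 2.0 / (k + 2.0)          # open-loop stepsize strategy
        for blk in blocks:               # blocks visited in cyclic order
            grad = A.T @ (A @ x - b)     # gradient at the *current* iterate
            x[blk] = (1 - gamma) * x[blk] + gamma * lmo_l1(grad[blk], radius)
    return x

# toy usage with two blocks of ten coordinates each
rng = np.random.default_rng(0)
A, b = rng.standard_normal((40, 20)), rng.standard_normal(40)
blocks = [np.arange(0, 10), np.arange(10, 20)]
x = cyclic_block_cg(A, b, blocks)
print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```

The Gauss-Seidel flavor is the point of the cyclic rule: each block update sees the blocks already updated in the current cycle, unlike the classical method, which moves all coordinates at once.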
Similar Papers
Conditional Gradient Sliding for Convex Optimization
In this paper, we present a new conditional gradient type method for convex optimization by utilizing a linear optimization (LO) oracle to minimize a series of linear functions over the feasible set. Different from the classic conditional gradient method, the conditional gradient sliding (CGS) algorithm developed herein can skip the computation of gradients from time to time, and as a result, c...
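The gradient-skipping effect comes from the CGS inner loop, which approximately solves a quadratic prox subproblem using only the LO oracle while reusing a single outer gradient. Below is a minimal sketch of that inner procedure under these assumptions; the accelerated outer loop and its parameter schedule are omitted, and the names and stopping rule are illustrative rather than the paper's exact statement.

```python
import numpy as np

def cnd_g(g, x, beta, eta, lmo, max_iter=1000):
    """Inner loop of conditional gradient sliding (sketch): approximately
    minimize <g, u> + (beta/2)*||u - x||^2 over the feasible set using
    only the linear minimization oracle `lmo`, reusing the single outer
    gradient g, and stopping once the Frank-Wolfe gap drops below eta."""
    u = x.copy()
    for _ in range(max_iter):
        q = g + beta * (u - x)           # gradient of the quadratic model
        v = lmo(q)
        gap = q @ (u - v)                # Frank-Wolfe gap of the model
        if gap <= eta:
            break
        denom = beta * np.sum((v - u) ** 2)
        alpha = min(1.0, gap / denom) if denom > 0 else 1.0
        u += alpha * (v - u)             # exact line search on the model
    return u

# toy usage: feasible set = l2-ball of radius 1
lmo_ball = lambda q: -q / max(np.linalg.norm(q), 1e-12)
u = cnd_g(g=np.ones(5), x=np.zeros(5), beta=2.0, eta=1e-6, lmo=lmo_ball)
```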
Duality between subgradient and conditional gradient methods
Given a convex optimization problem and its dual, there are many possible first-order algorithms. In this paper, we show the equivalence between mirror descent algorithms and algorithms generalizing the conditional gradient method. This is done through convex duality and notably implies that for certain problems, such as supervised machine learning problems with nonsmooth losses or problems ...
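A hedged sketch of the bridge behind this equivalence: writing the constraint with an indicator and passing to the Fenchel dual turns the constraint into a support-function term, and a subgradient of a support function is exactly an output of the linear minimization oracle, so subgradient steps on the dual and conditional gradient steps on the primal consume the same oracle (the precise stepsize correspondence is in the paper). For closed convex f and a compact convex set C,

```latex
\min_{x \in C} f(x)
  \;=\; \max_{y}\,\bigl\{ -f^{*}(y) - \sigma_C(-y) \bigr\},
\qquad
\sigma_C(z) := \max_{x \in C} \langle z, x \rangle,
\qquad
\partial \sigma_C(z) \ni \operatorname*{argmax}_{x \in C} \langle z, x \rangle .
```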
Inexact block coordinate descent methods with application to the nonnegative matrix factorization
This work is concerned with the cyclic block coordinate descent method, or nonlinear Gauss-Seidel method, where the solution of an optimization problem is achieved by partitioning the variables into blocks and successively minimizing with respect to each block. The properties of the objective function that guarantee the convergence of such an alternating scheme have been widely investigated in the l...
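Since the blurb is truncated, here is a minimal sketch of the scheme it describes, specialized to NMF: the variables split into the two blocks W and H, and each block is only inexactly minimized (a few projected gradient steps) before moving on in Gauss-Seidel order. The function names, step rules, and iteration counts below are illustrative assumptions.

```python
import numpy as np

def pgd_nneg(grad_fn, X, step, n_steps):
    """A few projected gradient steps onto the nonnegative orthant
    (inexact minimization of one block)."""
    for _ in range(n_steps):
        X = np.maximum(X - step * grad_fn(X), 0.0)
    return X

def inexact_bcd_nmf(V, r, n_cycles=100, inner=5):
    """Sketch of inexact cyclic block coordinate descent for NMF:
    min_{W,H >= 0} 0.5*||V - W H||_F^2, alternately updating the blocks
    W and H with a few projected gradient steps each."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, r)), rng.random((r, n))
    for _ in range(n_cycles):
        L_W = np.linalg.norm(H @ H.T, 2)   # Lipschitz const. of grad wrt W
        W = pgd_nneg(lambda W_: (W_ @ H - V) @ H.T, W, 1.0 / L_W, inner)
        L_H = np.linalg.norm(W.T @ W, 2)   # Lipschitz const. of grad wrt H
        H = pgd_nneg(lambda H_: W.T @ (W @ H_ - V), H, 1.0 / L_H, inner)
    return W, H

# toy usage on a random nonnegative matrix
V = np.abs(np.random.default_rng(1).standard_normal((30, 20)))
W, H = inexact_bcd_nmf(V, r=5)
print("residual:", np.linalg.norm(V - W @ H, "fro"))
```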
Conditional gradient type methods for composite nonlinear and stochastic optimization
In this paper, we present a conditional gradient type (CGT) method for solving a class of composite optimization problems where the objective function consists of a (weakly) smooth term and a strongly convex term. While including this strongly convex term in the subproblems of the classical conditional gradient (CG) method improves its convergence rate for solving strongly convex problems, it d...
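A minimal sketch of the idea of keeping the strongly convex term inside the subproblem, for the special case g(x) = (mu/2)*||x||^2 over an l2-ball, where the composite subproblem has a closed form (project -grad/mu onto the ball). The stepsize and names are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def composite_oracle(g, mu, radius):
    """argmin_{||s|| <= radius} <g, s> + (mu/2)*||s||^2:
    complete the square, then project -g/mu onto the ball."""
    s = -g / mu
    n = np.linalg.norm(s)
    return s if n <= radius else s * (radius / n)

def composite_cg(grad_f, mu, radius, dim, n_iters=300):
    """Sketch of a conditional gradient iteration whose subproblem keeps
    the strongly convex term, for min f(x) + (mu/2)*||x||^2 over a ball."""
    x = np.zeros(dim)
    for k in range(n_iters):
        s = composite_oracle(grad_f(x), mu, radius)
        gamma = 2.0 / (k + 2.0)
        x = (1 - gamma) * x + gamma * s
    return x

# toy usage: f(x) = 0.5*||x - c||^2
c = np.arange(5, dtype=float)
x = composite_cg(lambda x: x - c, mu=0.1, radius=1.0, dim=5)
```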
A Stochastic Quasi-Newton Method for Online Convex Optimization
We develop stochastic variants of the well-known BFGS quasi-Newton optimization method, in both full and memory-limited (LBFGS) forms, for online optimization of convex functions. The resulting algorithm performs comparably to a well-tuned natural gradient descent but is scalable to very high-dimensional problems. On standard benchmarks in natural language processing, it asymptotically outperfor...
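A minimal sketch of the full (non-limited-memory) stochastic variant, under these assumptions: gradient differences are measured on the same sample, with a damping term lam*s added for stability, in the style of online BFGS methods. The toy problem, names, and step schedule are illustrative.

```python
import numpy as np

def obfgs(grad, x0, sample_fn, n_iters=200, eta0=0.5, lam=1.0):
    """Sketch of an online/stochastic BFGS variant: the inverse-Hessian
    estimate H is updated from gradient differences measured on the SAME
    sample (plus a damping term lam*s) to keep the secant pair consistent."""
    x, I = x0.astype(float), np.eye(x0.size)
    H = np.eye(x0.size)
    for t in range(n_iters):
        z = sample_fn()                          # one stochastic sample
        g = grad(x, z)
        x_new = x - (eta0 / (1 + t)) * (H @ g)   # quasi-Newton step
        s = x_new - x
        y = grad(x_new, z) - g + lam * s         # same-sample diff + damping
        sy = s @ y
        if sy > 1e-12:                           # curvature condition
            rho = 1.0 / sy
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x = x_new
    return x

# toy usage: stochastic least squares, one row sampled per step
rng = np.random.default_rng(0)
A, b = rng.standard_normal((100, 5)), rng.standard_normal(100)
x = obfgs(lambda x, i: A[i] * (A[i] @ x - b[i]),
          np.zeros(5), lambda: rng.integers(100))
```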
Journal: SIAM Journal on Optimization
Volume: 25
Issue: -
Pages: -
Published: 2015